Gaussian Membership Inference Privacy

Neural Information Processing Systems

We propose a novel and practical privacy notion called $f$-Membership Inference Privacy ($f$-MIP), which explicitly considers the capabilities of realistic adversaries under the membership inference attack threat model. Consequently, $f$-MIP offers interpretable privacy guarantees and improved utility (e.g., better classification accuracy). In particular, we derive a parametric family of $f$-MIP guarantees that we refer to as $\mu$-Gaussian Membership Inference Privacy ($\mu$-GMIP) by theoretically analyzing likelihood ratio-based membership inference attacks on stochastic gradient descent (SGD). Our analysis highlights that models trained with standard SGD already offer an elementary level of MIP. Additionally, we show how $f$-MIP can be amplified by adding noise to gradient updates.
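The abstract notes that $f$-MIP can be amplified by adding noise to gradient updates. A minimal sketch of such a noisy SGD step is shown below, assuming a per-example gradient function `grad_fn` and illustrative values for the clipping norm and noise scale (all names and parameters here are hypothetical, not taken from the paper):

```python
import numpy as np

def noisy_sgd_step(params, grad_fn, batch, lr=0.1, clip_norm=1.0,
                   noise_std=0.5, rng=None):
    """One SGD step with per-example clipping and Gaussian gradient noise.

    Hypothetical sketch: grad_fn(params, example) is assumed to return the
    gradient for a single example; clip_norm and noise_std are illustrative.
    """
    rng = rng or np.random.default_rng()
    clipped = []
    for example in batch:
        g = grad_fn(params, example)
        # Clip each per-example gradient to bound any single example's influence.
        norm = np.linalg.norm(g)
        clipped.append(g * min(1.0, clip_norm / max(norm, 1e-12)))
    avg = np.mean(clipped, axis=0)
    # Gaussian noise on the averaged update obscures individual contributions,
    # which is the mechanism the abstract describes for amplifying MIP.
    noise = rng.normal(0.0, noise_std * clip_norm / len(batch), size=avg.shape)
    return params - lr * (avg + noise)
```

With `noise_std=0` this reduces to plain clipped SGD, matching the observation that standard SGD already offers an elementary level of membership inference privacy; increasing `noise_std` trades utility for a stronger guarantee.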
